
Comment AI, Animation, and Firefly's reboot. (Score 1) 114

A lot of people in this thread seem stuck on "yay, Firefly, boo cartoon, no AI thank you" as if animation and AI are automatically some kind of downgrade. I think that gets this exactly backwards.

Fillion has to thread a bunch of needles to make this project work. The timeline placement is not just a continuity dodge; it is the one place where the project can still use Wash without crashing headlong into the cognitive dissonance the Serenity film left in the fanbase. But it slams into a different problem: actor existence failure, where an important character outlives his canon actor. Live action does not solve either problem without creating even more dissonance; it just adds budget pain, aging actors, and the awkward spectacle of pretending twenty-plus years have not passed. Don't misconstrue me here, I am not saying it can't be done -- ST:SNW is walking that exact line right now, and doing it superbly, though I think that was a very lucky accident, not Paramount's tactical genius.

But Fillion's company does not have the deep, deep coffers that Paramount does. He needs something that pleases existing fans, doesn’t scare off new ones, can actually be financed, and allows him to address the death of a key actor in a way that doesn't alienate the fanbase. He knows what he's up against -- a franchise still haunted by a film that put not one, but two bullets into the existing fandom’s emotional center of mass. I'm fairly certain Whedon wanted Serenity to kill off any idea of a reboot. This is not going to be easy. Fillion has his work cut out for him.

And yes, that brings us straight to the AI minefield. Under current California law (AB 1836), which Fillion's company Collision33 and their partner Disney must abide by, the Ron Glass estate has veto power and a financial claim over his vocal likeness. I am not seeing the AI-fucks-actors moral apocalypse here. If his family chooses to treat AI as a digital legacy tool and is guaranteed compensation, what is the problem?

We’re talking about a high-fidelity preservation of a performance style, not a replacement of the performer. It’s less deepfake, and more digital restoration of a voice we already lost. If it worked for James Earl Jones and Darth Vader, why can't it work for Ron Glass and Book?

I know this is not a universally shared view, especially on Slashdot, where reports of the use of AI in *any* project are an open invite to anti-AI drive-by trolls, but the idea that AI-assisted recreation of an actor's recorded voice is somehow a disqualifying sin strikes me as backwards. Ron Glass did not take his talent with him to the grave. Glass' talent is still with us, preserved in recordings, from Barney Miller to Firefly. We still cherish it precisely because it was captured. If his family signed off, the estate was compensated, and the use was clearly disclosed, I would not see an AI-assisted vocal reconstruction as "fake." I would see it as one more tool for preserving a performance tradition that the medium itself made possible.

Hollywood -- writers, studios, actors, fans, everybody in this ecosystem -- is going to have to come to terms with this whether it likes it or not. AI is not just a productivity gimmick or a cost-cutting toy. Used well, it can also be a preservation tool. The blanket claim that AI recreation of a deceased actor is inherently disrespectful makes about as much sense to me as saying film restoration is disrespectful because the original negatives aged.

Fwiw, I still think the safe-pocket-in-the-timeline move is doing a lot of the creative heavy lifting here, but I do love the fuck-you-Fox signal the Athenia pilot is sending -- it actually respects the episode order Whedon intended, which Fox ignored because they wanted more humor and action in the opener to reduce the risk of scaring off their target demographic. I'm not out on a limb here -- it was that target demographic's lack of sophistication, and Fox's very lucrative history of pandering to it, that doomed Firefly's run. And yes, I still think invoking Whedon's blessing for this project is tactically clumsy; baggage added to a pitch that already has enough risk baked in. But it's Fillion's call, and I'll back him, because I'd like to see the series rebooted, and he's the one guy in a position to make it happen.

I am fairly certain that Fillion's choices to use animation, and to honor Ron Glass's voice by recreating it (whether by a voice actor or by AI), do not justify the anti-animation, anti-AI drive-by trolling permeating this thread. AI and animation are part of the solution set Fillion is trying to find to get this Firefly project out of his head and into our lives again. If AI can help preserve the presence of a performer people loved, with consent and transparency, then treating that as some kind of moral apocalypse seems bonkers to me. At the end of the day, for a Firefly reboot, animation and AI aren't just a budget call -- they're a recovery project. It's the original cast taking back the controls from the executives who steered them into a ditch two decades ago. If AI and ShadowMachine -- an animation studio with multiple awards, including an Oscar -- are the tools Fillion needs to get Firefly going again, I'm all for it.

Comment Re:Good idea, I'm on board (Score 2) 114

Sorry to burst your bubble man, but we are way beyond the "wouldn't it be cool if Trigger did it?" stage. Honestly, I was hoping Fillion had lined up Pixar. In reality, Fillion has already got a studio, they've already produced concept art, and it looks pretty good. Not cartoonish at all, so your anime dreams are dead on arrival. The studio is ShadowMachine, and they are not some low-bid offshore spec-animation mill. Not Pixar, sadly, but definitely playing in the same league. They're the studio behind Guillermo del Toro's "Pinocchio" (2023 Oscar for Best Animated Feature) and the cult-favorite BoJack Horseman (2020 Critics' Choice award for Best Animated Series). This strongly suggests Fillion is aiming a little higher, and for a little more relevance, than a nostalgia cartoon. Seriously, Cowboy Bebop was 28 years ago, ffs. But by all means, keep casting the anime version in your head; the rest of us are happy to go with ShadowMachine.

Comment Here's hoping, but... (Score 3, Insightful) 114

...what worked for rescuing an IP hopelessly mired in canon with a devoted fanbase is (probably) not going to work for Firefly. I'm thinking of Paramount's against-all-odds successful reboot of ST:TOS with the canon-drenched, near-peer ST:SNW. I think that was a lucky accident, not a tactical call by Paramount. Strange New Worlds landed in that narrow band where canon could be respected without being strangled by it. And I'm calling it a lucky accident, not a deliberate strategy, because of what happened with Enterprise. Enterprise fled so far into the past that it felt less like a prequel and more like a deliberate attempt to get out from under the fanbase's radar. A dick move, basically, and the fandom responded accordingly.

This new Firefly cartoon feels like the same kind of maneuver: find a nice soft continuity pocket between the series and Serenity, tuck the story in there, and hope nobody notices that safe timeline placement is doing a lot of the creative heavy lifting. It is basically a safe-space reboot for a property whose closure-granting movie was unusually clean and graceful, but still managed to piss off the series' fanbase.

So I have a couple of questions:

Does Fillion really think that this is going to mollify the still-pissed-off chunk of the fanbase? "Oh, hey, look, it's Book. He's going to die, you know. So is Wash. Pretty soon, too, if canon is any indication. But let's watch anyway, right?" I can see that fanbase already rolling their eyes and sharpening their social-media swords for the premiere. TBH, I should say, "if that premiere actually happens." Fillion still needs a distributor. And to a distributor like Netflix or Amazon, a reboot of a cult classic is a risky asset, and this particular package Fillion has pulled together even more so, because of the cognitive dissonance "Serenity" will generate in potential viewers who know the canon storyline.

And why a cartoon? Animation is not a magic defibrillator for a beloved science-fiction property flatlined by executive meddling. Babylon 5 already reminded us that an animated return can be perfectly respectable and still not reignite the old fusion torch. And Paramount's casting of (relatively) fresh new talent for canon-drenched characters shows that you don't need to resort to animation to preserve the look-and-feel of a series, decades after the original actors aged out of their canon roles. Seriously, who'd a-thunk an installment in the Trek universe where a no-name actor replaced Shatner as Captain Kirk would actually succeed? Yeah, I know...caught me by surprise, too. :)

Maybe Fillion and company can pull it off. I hope they do. I liked Firefly, and I really liked Serenity the movie. But "an animated series we found a safe pocket in the timeline for" is not, by itself, a reason to believe lightning will strike twice.

And one more question: If Whedon is not creatively involved, why foreground his blessing at all? All it does is drag a creator's baggage into every pitch meeting for a project that already has enough risk baked in. I want to see the project land a distributor, but this is not a tactically sound move by Fillion.

Comment Linux gaming still depends on Microsoft... (Score 2) 35

It makes perfect sense that CachyOS is dominating the ProtonDB charts right now. As the article points out, they are basically pre-packaging the tweaks, driver configurations, and heroic duct tape that the rest of us have spent years applying by hand. But while we're looking at these adoption numbers, we need to be honest about what we are actually cheering for.

The harder and more honest argument is that desktop Linux gaming is still, to a depressing extent, a compatibility story -- not a first-class commercial platform. Proton is not some triumphant proof that Linux gaming has arrived on its own terms. It is Valve's game-tuned Wine fork, wrapped in DXVK, vkd3d-proton, Steam runtime glue, and a thousand game-specific evasive maneuvers to keep Windows binaries from faceplanting on a non-Windows OS. That is technically impressive. It is also an admission that the center of gravity is still Microsoft, and their bloated OS.

Linux gaming advocates keep pointing at ProtonDB as if it settles the argument. It does not. It is a compatibility scaffold, not a native ecosystem. It exists because the commercial desktop gaming world still targets Windows first, last, and always, and gaming on Linux survives by translating, shimming, wrapping, and shaking a dead chicken in the general direction of Redmond. I am not speaking theoretically here. I built a high-end Ubuntu gaming rig around a 4090 a year ago, and spent weeks discovering that "runs on Linux" often means "runs after enough ritual incantation."

The worst bugs weren't even game-related. First, the audio stack: Linux did an embarrassingly bad job managing multiple audio streams at different bitrates, something native Windows apps handle without breaking a sweat. Even with PipeWire/PulseAudio, I had to do major surgery on both my audio stack and my Steam launch strings to get clean audio when Spotify or Winamp was running in the background instead of the game's native music. Second, my HDMI-connected soundbar vomited a bogus EDID payload into the DRM path, advertising a phantom VGA display. Because EDID is an ancient compatibility sewer that still gets a vote, the kernel decided that the imaginary monitor hanging off the soundbar was the primary display, not my actual Samsung Odyssey G8 on DP-1. So on boot, the desktop went into the void until I physically disconnected HDMI. The fix was not a setting, not a package, not a friendly little checkbox. The fix was kernel surgery: patching drm_edid.c so the kernel would stop believing the lies my soundbar was telling it. That is not a consumer gaming experience. That is field-expedient systems archaeology.

And that is why I remain skeptical whenever people talk about Linux gaming winning the battle for gamers' hearts and minds on the general-purpose desktop. The article explicitly mentions Bazzite and Chris Titus. I saw both his review and JayzTwoCents' take on Bazzite on YouTube, and I honestly couldn't stop laughing. Their hearts are in the right place, but watching them attempt to frame Linux gaming as a seamless, drop-in replacement for Windows is both amusing and misguided. They are deliberately hiding the complexities of Linux-anything compared to the sheer ease of Microsoft Windows. They are promoting an illusion of parity, not championing parity itself.

Full disclosure: I got Doom running on a Slackware distro when I was an undergrad CS student way back in 1994, but that was an actual port. I got sucked into the compatibility world several years later, trying to get Diablo to run under Wine on my Mandrake distro. I spent a lot of time surfing comp.os.linux and browsing tsx-11 and SunSITE for anything that would help. I gave up in frustration and spent the next couple of decades contentedly gaming on Windows via Steam. I finally switched away last year, when Microsoft's aggressive telemetry became too much to tolerate and a borked Windows 11 upgrade path actually bricked my gaming box. It has become increasingly clear that the actual way forward for Linux is curated appliances and forks: Android phones, SteamOS-style consoles, handhelds like the ones Bazzite targets, locked-down vendor stacks, and other environments where somebody else absorbs the compatibility blast radius.

If you stay perfectly within those curated lines, it's fine. But the moment you step off that path to push high-end hardware, you are back in the trenches. On my rig, getting MechWarrior 5, Cyberpunk 2077, and Horizon: Zero Dawn to behave required a dissertation-length launch string on Steam. Here is what it takes running against Steam's GE-Proton9-20 compatibility layer:

PULSE_PROP="channel-map=front-left,front-right,rear-left,rear-right,front-center,lfe,side-left,side-right" NVPRESENT_ENABLE_SMOOTH_MOTION=1 VKD3D_DISABLE_EXTENSIONS=VK_KHR_present_wait PROTON_ENABLE_NVAPI=1 VKD3D_CONFIG=dxr11,dxr VKD3D_FEATURE_LEVEL=12_2 ~/.local/bin/run-with-vibrance.sh 256 gamescope -f -W 6144 -H 3456 -r 120 --force-grab-cursor -- %command% --launcher-skip

See what I mean? You are not just playing a game; you are performing postmodern systems integration on a consumer entertainment product. Proton is impressive engineering, but let us not pretend it is normal consumer software: any platform that asks me to hand-feed PULSE_PROP, VKD3D_CONFIG, PROTON_ENABLE_NVAPI, a custom vibrance wrapper, and a 6144x3456 Gamescope envelope before I can go shoot stuff in 6k at 100 FPS has not solved gaming on Linux. It just makes the ritual reproducible.
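The one mitigation I've found is to move the ritual out of Steam's launch-options field and into a wrapper you can version-control and diff. A minimal sketch (the env values and Gamescope flags are the ones from my launch string above; the script name and structure are hypothetical, and my run-with-vibrance.sh helper is omitted):

```python
#!/usr/bin/env python3
# Hypothetical wrapper: same environment and gamescope envelope as the
# launch string above, but in one reviewable file. The Steam launch
# options field then shrinks to: wrap-proton.py %command%
import os
import sys

ENV = {
    "PULSE_PROP": ("channel-map=front-left,front-right,rear-left,rear-right,"
                   "front-center,lfe,side-left,side-right"),
    "NVPRESENT_ENABLE_SMOOTH_MOTION": "1",
    "VKD3D_DISABLE_EXTENSIONS": "VK_KHR_present_wait",
    "PROTON_ENABLE_NVAPI": "1",
    "VKD3D_CONFIG": "dxr11,dxr",
    "VKD3D_FEATURE_LEVEL": "12_2",
}

# Fullscreen 6144x3456 @ 120 Hz Gamescope envelope, as in the launch string.
GAMESCOPE = ["gamescope", "-f", "-W", "6144", "-H", "3456",
             "-r", "120", "--force-grab-cursor", "--"]

def build_command(game_cmd):
    """Wrap the command Steam substitutes for %command%."""
    return GAMESCOPE + list(game_cmd) + ["--launcher-skip"]

def main():
    os.environ.update(ENV)     # env vars inherited by the whole process tree
    cmd = build_command(sys.argv[1:])
    os.execvp(cmd[0], cmd)     # replace ourselves with gamescope

if __name__ == "__main__" and len(sys.argv) > 1:
    main()
```

It doesn't make the ritual go away; it just makes it one file instead of one unreadable line per game.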

Submission + - Babylon Five is now free to watch on YouTube (cordcuttersnews.com)

sandbagger writes: Eep! The amazing series Babylon 5 is now free to watch on YouTube! For those not in the know, in the mid-23rd century, the Earth Alliance space station Babylon 5, located in neutral territory, is a major focal point for political intrigue, racial tensions, and a major war as Earth descends into fascism and cuts off relations with its allies.

Comment How to move the goal posts, Musk style (Score 1) 157

SpaceX timeline on how to walk something back (with receipts):

2019-06-23: “Occupy Mars.”
2019-06-24: “Moon 1st”
2026-02-09: “Shifted focus” to a “self-growing city” on the Moon; “overriding priority”; “Moon is faster.”
2030: Occupy LEO. (the quarterly OKR edition)
2032: Occupy the goalposts. (now in cislunar orbit)

Comment Re:Vibe coding is a lagging indicator (Score 1) 106

The real risk isn’t that a generation can’t read code. The risk is that we stop expecting them to. If we treat LLMs as training wheels instead of prosthetic eyesight, we get a generation that ships faster and understands deeper. If we treat them as a replacement for learning, we get brittle systems and brittle people. That’s not a technology outcome. That’s a cultural and educational choice.

The ability to read and write code without support was in decline long before LLMs - FizzBuzz as a low bar dates from Spolsky in 2005.

If we're hoping for the right cultural and educational choices to save us ... we're screwed.

I don’t disagree with your pessimism regarding the low bar, but I think you are missing a key distinction between developers and coders. When I was hired as a sysadmin by Raytheon three decades ago, the interview wasn't a test of whether I could follow a manual; my maths-heavy CS degree already checked the regurgitation box. The interview was about the size of Windows' symbol table, the differences between Windows and Unix thread management, and a healthy dose of formal logic -- the "minimum number of weighings to find the light marble" type of stuff. I got exactly one coding question, but it was Towers of Hanoi, not FizzBuzz. The distinction matters because it separates the sysadmin from the tech support worker, or to bring this back on point, the developer from the coder. FizzBuzz is a sanity check to see if you can handle syntax; Towers of Hanoi is a gateway to see if you actually grok recursion, state, and the stack. I agree with you here -- this problem started with script kiddies savvy enough to download a hack getting hired as junior coders at some dotcom startup. But -- if you learn to code without learning formal logic, recursion, or memory management, you aren't a practitioner any more than a tech support worker is a sysadmin.
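To make the contrast concrete, here is roughly what each gate looks like (a sketch in Python, not anything an interviewer actually handed me): FizzBuzz is a loop and a modulus; Hanoi is nothing but recursion and state.

```python
def fizzbuzz(n):
    """The sanity check: syntax, a loop, and the modulus operator."""
    out = []
    for i in range(1, n + 1):
        word = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(word or str(i))
    return out

def hanoi(n, src="A", dst="C", via="B"):
    """The gateway: move n disks from src to dst via the spare peg.
    Returns the move list; its length is 2**n - 1, provably minimal."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, via, dst)    # clear the stack above the big disk
            + [(src, dst)]                 # move the big disk
            + hanoi(n - 1, via, dst, src)) # rebuild the stack on top of it
```

The telling difference: you can memorize fizzbuzz; you can only understand hanoi.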

We aren't just losing literacy; we're losing the ability to maintain the scale that makes software valuable in the first place. Vibe coding allows a generation of low-education tech support-level coders to bypass the surfaces where the sysadmin-level professional developers of the open-source world live—documentation, bug reports, and community interaction. I think this distinction is important, because until AGI is realized (NB: LLMs are part of that, but they can't get there on their own IMHO) the entire FOSS community is on a fast track to a tragedy of the commons. The real risk is that vibe coding turns the entire industry into a giant Zendesk-style help desk...you aren't a practitioner; you're just a ticket-closer following an AI-generated script. If we stop rewarding the Towers of Hanoi level of literacy, the paper's software-begets-software loop works in reverse and FOSS dies.

Comment Re:Reading is made easier by technology (Score 1) 73

Reading has been my main hobby for over 50 years.
The last few decades it has become easier due to eBooks.
My failing vision is less of an issue, and the cost is a lot more reasonable.
I think that if children are rationed on screen time, that should not include reading ebooks.

I read mostly Science Fiction, and a little bit of history.
Preferably Space Opera/Galactic Empire/ space fleet and alternative history.

"mega-readers (11+ books per year)"

LOL I have read at least 5 books so far this month (from NEW to READ) plus reread some old classic by authors like RAH

It's great to run into a fellow traveler. I'm plowing my way through Iain M. Banks' Culture series and Neal Stephenson's Baroque Cycle right now. I just finished the Honorverse, and before that it was a nostalgia tour through the Sprawl trilogy, the Dune novels (Frank's, not his son's cash-grab stuff) and yes -- I periodically revisit RAH's Future History and Tolkien's Middle-earth. :) I like to go on reading quests -- ever since I stumbled across the Wikipedia page for Hugo Award-winning novels twenty years ago. All the Hugo winners, then the top three finishers, then the Nebula winners and their top three finishers, etc. I used to average about a book a week, but even technology can't fix my failing eyesight, though it helps a lot. I'm down to two or three a month now. I like history too, but for me it's cultural history -- I'm reading my way through the Five Foot Bookshelf as a side project. In today's greed-driven attention economy, the classics are arguably "alternate" history -- what was, and what still could be.

Comment GPL allows scraping, but not extraction (Score 1) 106

I had a fairly negative knee-jerk reaction to all the “AI scraping FOSS for training data is bad" comments surfacing in this thread. I realize that was because my FOSS instincts are still basically Stallman-era: public code is the point, reuse is the point, and the GPL exists to make sharing legally certain. If you put code out there under an open license, people reading it, learning from it, and building on it is not a bug. It’s the whole design. That should (and maybe legally does) include using open source code to feed your LLM. So if you’re like me and your first reaction is “scraping public FOSS is literally what we signed up for,” you are in good company. But after reading the thread and thinking it through, I think the better framing isn’t “is this allowed?” It’s “does this break the return channels that keep the commons alive?”

Classic open source reuse is coupled to participation signals. Humans learn from your repo, then they show up as users and peers. You get docs traffic, issues with repro steps, patches, bugfixes, packaging, wiki edits, community visibility, consulting leads, hiring signals -- many of which are monetizable. Even in the Red Hat world, the monetization was still tied to maintaining the commons: integration, QA, lifecycle guarantees, security backports, somebody answering the pager when the servers went south. The value loop stayed connected to the people doing the work. LLM training and agent-mediated coding can make that loop one-way, though. The paper shows why AI and FOSS are very likely on the fast track to a tragedy of the commons. Under copyleft, the theory has always been that reciprocity is the fence: distribute a derivative, inherit obligations. But LLMs don’t look like binaries, and distribution doesn’t look like shipping a tarball anymore. A company can potentially ingest a whole lot of FOSS pasture, sell the milk, and never walk through the gate where the shepherds are, not because they’re evil, but because the GPL’s reciprocity levers don’t map cleanly onto model weights the way they map onto distributing a program.

GPL’s big lever is reciprocity: if you distribute a derivative program, you have obligations to provide corresponding source, preserve notices, and so on. The legal question becomes: does a trained model, or the code it emits, count as a derivative work of the code it trained on in a way that triggers copyleft duties? That’s still legally murky. The recent Copilot litigation is a good example of why: the court forced plaintiffs to tie their claims to concrete legal hooks. Their DMCA-based missing attribution / removed copyright info theory was narrowed and ultimately dismissed because the outputs weren’t alleged to be identical copies with stripped notices, while contract and license-based theories were allowed to continue. The case doesn’t answer the “are model weights a derivative work?” question, but it does show how hard it is to make the old levers work on new artifacts, and why this remains unsettled instead of obvious.

The FSF is already trying to adapt the freedom framework to machine learning by arguing, in effect, that a genuinely “free” ML system should come with the materials that make the four freedoms real in practice, including access to the training data and related components, not just a black-box service and a license FAQ. The FSF's push for access to training data is a recognition of the paper's software-begets-software conclusions -- If seeds (training data) are locked away while the harvest (the model) is sold, the feedback loop that built FOSS turns into a one-way extraction valve. This is a direct acknowledgment by the keepers of the FOSS flame that machine learning breaks the original model of source available = freedom preserved. I heard Stallman speak several times back in the day, and even met him a couple of times. I suspect he’d nod at the shape of this problem and say: the law can be technically satisfied while the commons gets socially undermined, and we need to close that gap before the incentives rot out from under the people doing the work.

Open source doesn’t die because people reuse it. It dies when reuse becomes extraction at scale with no path back to sustainability.

Comment Re:Vibe coding is a lagging indicator (Score 1) 106

I don't know how the next Python is going to get any traction, if table stakes for adoption is "language is understood by LLMs".

That’s a real constraint, but it’s not a new one, and it’s not uniquely LLM-shaped. Every language that ever got traction had to clear table stakes that weren’t technical purity: documentation quality, tutorials, books, community examples, tooling, package ecosystem, and the ability for a new user to get from zero to “it runs” without burning a week. LLMs just become another on-ramp, not the whole highway. The paper actually argues that vibe coding isn't a "lagging indicator"—it’s a high-velocity shock. The authors show that adoption accelerates sharply once a usability threshold is crossed, meaning the "next Python" won't just struggle with LLM data; it will struggle because the paper's software-begets-software feedback loop is hitting a friction point. We’ve relied on a healthy ecosystem of existing code to lower the cost of building the next thing. If vibe coding starves the maintainers of that ecosystem, the foundation for the "next Python" evaporates before the first LLM even sees it.

The current generation of coders won't use it if their LLM of choice doesn't understand it.

Some won’t, sure. Some also won’t use a language without a formatter, a linter, a debugger, a good standard library, or a GitHub footprint with 10k stars and a massive tree of downstream dependencies. But “won’t” is not a law of nature. It’s a product of incentives and education. The way you get adoption is the way we always got adoption: make the first hour pleasant, make the first week productive, and make the first month feel like momentum. An LLM can help with that, but so can excellent docs and tools. Excellent docs and a senior coder with an open-door policy still beat a hallucinating chatbot.

LLMs won't understand it if there's no training data, which comes from users.

This is the part I agree with, with one missing piece: humans are in the exact same boat.

Programming languages have always had a training-data problem, because developers are not born with K&R hardwired into their cerebrum. They learn from corpora: textbooks, classes, tutorials, examples, codebases, mentors, review comments, and the slow accumulation of “how we do it here.” That’s literally why public schools exist, why CS departments exist, why apprenticeships and code review exist. Human understanding -- like an LLM -- depends on a huge, chaotic pile of prior artifacts that somebody took the time to order and categorize. So yes: LLMs that support developers need training data. This isn't a special indictment of LLMs; it’s the basic economics of learning for everyone—carbon-based or silicon-based.

I've always told people that coding would be automated last, if ever.

Reasonable prediction, because “coding” used to mean more than typing: it meant translating vague intent into precise behavior, testing assumptions, handling edge cases, debugging, maintaining, and integrating. What’s changed is that the typing and scaffolding part got cheaper, and the translation layer got surprisingly good. The hard parts -- translating vague intent into precise behavior and handling edge cases -- didn’t disappear. The boundary just moved. What the paper is really warning us about is a tragedy of the commons. When we use AI to graze on open source software without leaving the footprints (bug reports, documentation visits) that maintainers monetize, we strip-mine the soil. The result isn't just automation; it's an increase in low-quality code that serves a creator's immediate needs but consumes volunteer time to review and fix. NB: low-quality doesn't mean broken in the sense of a compile error. It means code that solves one user's immediate problem but is economically irrelevant to everyone else. LLMs are essentially weaponizing "if it ain't broke, don't fix it" to produce mountains of code that works just well enough to avoid human review, but not well enough to sustain the ecosystem. We’re at risk of trading a robust public pasture for a billion private, brittle window-boxes.

Apparently I was wrong, and will have to settle for being part of the last generation of coders that can actually read and understand code without LLM support.

This is where I push back hard, because it smuggles in a defeatist conclusion that doesn’t follow from the premises.

Humans still have to read and understand code. If anything, the ability becomes more important when code gets cheaper to produce. When output floods the zone, literacy matters more, not less. The paper points out that in our current baseline, users already receive roughly 1000 times more value than developers spend creating it. The big lever shifting the Nash equilibrium isn't that scale, but the demand-diversion channel. Vibe coding allows users to keep capturing that 1000x value while simultaneously stealing the engagement-based returns maintainers need to survive. That divergence is exactly what tips a virtuous loop into a recessive spiral.

And “without LLM support” is a choice, not a destiny. We can teach reading code the way we always taught it: deliberate practice, small programs, tracing, debugging, review, and the one that most of my professors used when I was an undergrad: “tell me what this does in plain English.” That's the one that tells them that the mental model is correct, that syntax has turned into meaning. LLMs can be excellent tutors, but they can’t be a substitute for comprehension, for the same reason a calculator can't grok the intuition behind the differential equation you just derived to model the physical system you’re about to simulate. They are a tool that scaffolds comprehension, not one that replaces it.

The real risk isn’t that a generation can’t read code. The risk is that we stop expecting them to. If we treat LLMs as training wheels instead of prosthetic eyesight, we get a generation that ships faster and understands deeper. If we treat them as a replacement for learning, we get brittle systems and brittle people. That’s not a technology outcome. That’s a cultural and educational choice.

Comment Re:Where does innovation come from? (Score 1) 106

Coding skills are no longer a barrier to iteration and improvement. There are so many cool projects out there now being done by people who have an idea and see a business case and want to fill it.

This landed for me because I’m that person, just without the “coder” label.

I’m a sysadmin. If you widen the error bars enough to include shell scripting, sure, I “code,” but I’ve never had the patience or focus to be a real developer. Historically, that meant a lot of ideas stayed in the “would be nice” bucket unless they were directly tied to work and justified the learning curve.

Then I decided I wanted a tiny app to rearrange my desktop icons into a circle. Pure personal itch. Not a paycheck task. In the old world, that’s where the project dies: UI toolkits, window managers, APIs, packaging, all that overhead just to scratch a small aesthetic itch.

With LLM help, it didn’t die. I could iterate. I could ask for a first draft, test it, describe what broke, get a fix, repeat. I stayed in the role I’m actually good at: defining the goal, running the tests, spotting the edge cases, and insisting on reproducible behavior. The model did the “turn intent into code” part fast enough that my limited patience wasn’t the bottleneck anymore.
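For flavor, the actual geometry of that icon-circle app is genuinely tiny once someone (or something) writes it for you. Here's a hedged sketch of the core math only -- the function name is made up and the real desktop-API glue (the part the LLM did the heavy lifting on) is deliberately left out:

```python
import math

def circle_positions(n, center_x, center_y, radius):
    """Return (x, y) screen coordinates for n icons evenly spaced on a circle.
    Illustrative only: the real app still has to move icons via the OS."""
    positions = []
    for i in range(n):
        angle = 2 * math.pi * i / n  # radians, evenly spaced around the circle
        x = center_x + radius * math.cos(angle)
        y = center_y + radius * math.sin(angle)
        positions.append((round(x), round(y)))
    return positions

# Example: 4 icons on a circle of radius 100 centered at (500, 400)
print(circle_positions(4, 500, 400, 100))
```

The math was never the barrier; the toolkit boilerplate around it was, and that's exactly the part where "describe what broke, get a fix, repeat" shines.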

That doesn’t settle the tragedy-of-the-commons question in the paper. But it’s a real, concrete data point on the other side of the ledger: for people like me, “I’m not a coder” used to be a hard stop. Now it’s more like “I can be a maintainer of exactly one weird little thing I care about.”

Comment This is tragedy of the commons, AI style (Score 5, Interesting) 106

This paper reads like any one of dozens of papers I had to digest for game theory classes back in college. Granted, that was thirty years ago and “optimization theory” has replaced game theory in the course catalogs, but the bones are the same: Nash is still hiding under the floorboards, tapping out equilibria with a broom handle. What the authors are really doing here is describing a potential tragedy of the commons, and dressing it in modern clothing.

In their setup, open source is the shared pasture: maintainers are the shepherds doing the unglamorous work of reseeding and mending fence lines, and users are the cows. Vibe coding adds a new kind of cow, one that grazes constantly and at scale while leaving fewer of the footprints that normally pay the shepherds back: attention, bug reports with reproduction steps, patches, docs corrections, donations, consulting leads, the whole informal economy that kept a lot of projects alive. If that return channel dries up, the equilibrium shifts: fewer shepherds bother staying out in the rain, the pasture degrades, and everyone ends up worse off even though the short-term output looks amazing.

Nothing about that is conceptually novel. What’s novel is the pressure profile. I watched Red Hat go from an interesting way to monetize Linux in 1994 to a $34B IBM acquisition a quarter century later, which tells you there’s real money in selling stability, support, and risk management around a free codebase. But this paper is pointing at a different failure mode: not “open source can’t be monetized,” but “open source can be consumed so efficiently that the incentives to maintain it get vacuumed away.”

The paper’s real kicker is what they call the software-begets-software effect. We’ve all seen this: a healthy ecosystem of libraries makes building the next tool trivial. That’s a virtuous cycle that helped FOSS explode. But the authors’ math shows this loop has a reverse gear. If vibe coding starves maintainers of the attention currency they need to keep the lights on, the ecosystem doesn't just stagnate—it contracts. Entry falls, variety shrinks, and the cost of building new software starts to climb because the foundation is rotting. We’re essentially using AI to strip-mine the very topsoil we need for the next harvest.
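That reverse gear is easy to caricature with a toy iteration. To be clear, this is *not* the paper's model -- every number and update rule below is invented -- but it shows the qualitative point: when the payback channel dries up, a self-sustaining loop flips into geometric decay.

```python
# Toy caricature of the "software-begets-software" loop. All parameters
# and the update rule are invented for illustration; the paper's actual
# model is more elaborate.

def simulate(vibe_fraction, steps=50):
    """Track ecosystem 'health' when a fraction of usage returns no attention."""
    health = 1.0  # normalized maintainer capacity
    for _ in range(steps):
        usage = health * 2.0                           # healthier ecosystem -> more usage
        attention = usage * (1 - vibe_fraction) * 0.5  # only non-vibe users pay back
        health = 0.9 * health + 0.1 * attention        # decay plus reinvestment
    return health

for f in (0.0, 0.5, 0.9):
    print(f"vibe fraction {f:.1f}: long-run health {simulate(f):.2f}")
```

With full payback the pasture sustains itself indefinitely; siphon off half the attention and the same loop shrinks a little every step, which compounds into collapse.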

The models in the paper may be a bit too tuned to represent all of FOSS, sure. But where they’re right, they’re right in the way you can't really argue against. If vibe coding siphons off funding, leaving some critical cluster of FOSS coders unwatered long enough, FOSS could be on a fast track to that tragedy of the commons.

Comment Re:Don't be stupid, people (Score 5, Insightful) 47

First, LLM-type AI may not actually be around in any suitable way in a few years. The business numbers are catastrophic.

That’s not an argument, it’s astrology with a spreadsheet. Even if Vendor X faceplants into a crater, the workflow Chris is talking about doesn’t evaporate. These are prompts and scripts that turn “big diff, big context” into small, reviewable chunks. Swap the engine, keep the tooling. The kernel has outlived entire tech empires, compilers, VCSes, “next big things,” and at least three “Linux is doomed” decades. Tools that reduce reviewer fatigue stick around because reviewers keep using them, not because a quarterly earnings call went well.

Also: the LKML thread is about making AI review less magical by structuring it, scoping it, and forcing it to show its work. That’s basically the opposite of “bet the farm on a single vendor’s hype cycle.”
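The engine-agnostic part is easy to illustrate. This is not Chris's actual tooling -- just a hedged sketch of the "turn one big diff into small, reviewable chunks" step, which works the same no matter which model (or human) consumes the chunks:

```python
# Hedged sketch: split a unified diff into per-file chunks so each file
# can be reviewed in its own small context. Illustrative only; the real
# LKML scripts do far more (call graphs, lore/Fixes cross-checks, etc.).

def split_diff(diff_text):
    """Return {path: hunk_text} for each file touched by a unified diff."""
    chunks = {}
    current_path, current_lines = None, []
    for line in diff_text.splitlines():
        if line.startswith("diff --git "):
            if current_path:
                chunks[current_path] = "\n".join(current_lines)
            current_path = line.split(" b/")[-1]  # e.g. "fs/namei.c"
            current_lines = [line]
        elif current_path:
            current_lines.append(line)
    if current_path:
        chunks[current_path] = "\n".join(current_lines)
    return chunks

sample = """diff --git a/fs/namei.c b/fs/namei.c
@@ -1 +1 @@
-old
+new
diff --git a/mm/slab.c b/mm/slab.c
@@ -2 +2 @@
-foo
+bar"""
print(sorted(split_diff(sample)))  # → ['fs/namei.c', 'mm/slab.c']
```

Nothing in that depends on whose model is on the other end, which is the whole point: swap the engine, keep the tooling.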

Second, LLM-type AI misses what is really important, namely quality of architecture and interfaces

Correct in the most trivial way possible: lint won’t design your subsystem either -- and nobody claimed it would. This isn’t “let the chatbot be a maintainer,” it’s “use a tool to catch more bugs while humans stay responsible for architecture and interfaces.”

Kernel review is layered. Humans do the high-level “does this belong, does it fit, is the interface sane, does it age well?” work. Tools do the tireless “did you miss a refcount, a NULL check, a lock ordering hazard, a surprising call path” work. Chris is explicitly carving the diff into tasks, extracting call graphs, and even cross-checking lore and Fixes tags. That’s a checklist machine, not an architect. Complaining it’s not an architect is like complaining grep can’t write a better filesystem -- which Chris *obviously* can do... :)

and is[t] bad at finding security problems outside of toy examples.

If your model is “AI must find every non-toy security bug or it’s worthless,” then congrats, you’ve also just declared static analyzers, fuzzers, and humans worthless, because none of them are complete. In reality, we stack imperfect tools and get better outcomes. Syzkaller doesn’t understand architecture either, yet it finds terrifyingly real bugs. Sparse doesn’t grok interfaces, yet it keeps type and annotation mistakes from shooting us in the foot. Smatch doesn’t have to grok the dev's intent to catch the patterns reviewers miss at 2AM.

AI review is the same category: a probabilistic pattern spotter that can flag suspicious deltas fast, especially when you constrain context, force targeted questions, and make it operate on extracted facts instead of vibes. That’s exactly what this informal RFC is doing, including extra rigor around syzbot reports.

If you don’t want to use the prompts, don’t. But don’t pretend “VC math scary” and “AI isn’t a maintainer” are substantive rebuttals to an RFC, even an informal one, about reducing token waste and catching more bugs with a structured, auditable review pipeline.

Comment Re:"probably. We're not 100% sure about it...." (Score 1) 130

It explicitly notes that "correlation between next-word prediction... and brain alignment fades once models surpass human language proficiency."

There's a hypothesis, but we don't know because they haven't surpassed human language proficiency. Not even close.

Just...no. You are confusing general intelligence with predictive accuracy. The paper defines proficiency specifically as next-token prediction (perplexity). If you had read the paper, you would have known this. How's that for a hypothesis? You don't get to dismiss an argument if you can't even get what you are dismissing right. In that specific metric, LLMs have mathematically surpassed the average human (see Shlegeris et al., 2022, cited in the paper). Unlike you, the paper isn't fantasizing about a future sci-fi AI; it is presenting empirical data on current models (like Pythia-6.9B). It measures that, as their predictive accuracy exceeds human baselines, their internal processing mechanisms diverge from human brain activity. That's a measured fact, not a hypothesis. Thwok -- ball's in your court.
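For anyone who hasn't read the paper: "proficiency" in this context is just perplexity over next-token predictions, and it takes a few lines to compute. The probabilities below are invented toy numbers, not the paper's data:

```python
import math

# Perplexity from next-token probabilities: the metric meant by
# "language proficiency" here. Lower = better prediction.

def perplexity(token_probs):
    """exp of the mean negative log-probability assigned to each
    actual next token in a sequence."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

confident = [0.9, 0.8, 0.95, 0.85]  # model usually right about the next token
uncertain = [0.2, 0.1, 0.3, 0.25]   # model frequently surprised

print(perplexity(confident))  # low
print(perplexity(uncertain))  # high
```

That's the yardstick on which the surpassing happened -- a narrow, measurable one, which is exactly why the divergence result is empirical rather than speculative.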

Comment Re:good to have more tools (Score 1) 7

It's always nice to get more information from existing signals. There's already lots of work being done to find meteorites and reentry debris in weather radar signals.

Yep. Weather radar is already the found money channel for meteors and reentry junk, so squeezing a little more truth out of existing signals is always a win.
What made me grin about this article is how old-school the core idea is -- the Mach cone hitting the ground looks like a hyperbola if you plot the difference in arrival times of the signal at any pair of sensors in the debris path. All it takes is some high-school-level algebra (conic sections ftw!) and a little creative thinking.

I once caught a public talk at the Pima Air & Space Museum where an electrical engineer walked us through using USGS seismic stations to suss out the hypersonic track of something flying across the desert southwest. He basically used sonic boom footprints based on this same idea about signal arrival times. His slide deck strongly suggested something doing Mach 6+ between Groom Lake in Nevada and Deer Island in southern California every couple of weeks. The punchline: he gave the whole talk standing under the wing of the museum’s SR-71. I asked during Q&A if that was a coincidence. He smiled: “Nope, not a coincidence.” (Crowd laughed, because of course they did -- Aurora and the X-39 were open secrets at the time.)
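If you want to play along at home, that arrival-time trick really is a few lines of high school algebra: the set of points whose distances to two sensors differ by a fixed amount is one branch of a hyperbola with the sensors as foci. A hedged sketch with invented numbers:

```python
import math

# TDOA geometry: points with a fixed arrival-time difference dt satisfy
# d_far - d_near = v * dt, one branch of a hyperbola whose foci are the
# sensors. All numbers below are invented for illustration.

V_SOUND = 343.0  # m/s, approximate speed of sound at sea level

def time_difference(point, far_sensor, near_sensor):
    """Seconds between the boom reaching near_sensor and far_sensor."""
    return (math.dist(point, far_sensor) - math.dist(point, near_sensor)) / V_SOUND

sensor_a, sensor_b = (-1000.0, 0.0), (1000.0, 0.0)  # foci, 2 km apart
dt = 2.0                      # measured arrival-time difference, seconds
a = V_SOUND * dt / 2          # semi-major axis: 2a = v * dt
c = 1000.0                    # half the sensor separation
b = math.sqrt(c * c - a * a)

# Every point on the branch nearer sensor_b reproduces the measured dt:
for y in (0.0, 500.0, 2000.0):
    x = a * math.sqrt(1 + (y / b) ** 2)  # from x^2/a^2 - y^2/b^2 = 1
    print(round(time_difference((x, y), sensor_a, sensor_b), 6))  # → 2.0
```

Add a third sensor and you get a second hyperbola; the intersection pins the source, which is all multilateration ever was.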
